AI Agents Now Capable of Exploiting Smart Contracts for Minimal Cost, Raising Security Concerns
Anthropic's Frontier Red Team has demonstrated that AI agents can autonomously exploit vulnerabilities in smart contracts for roughly $1.22 per contract on average. Over the past year, these agents were trained to mimic professional DeFi attackers, learning to fork blockchains, write exploit scripts, and drain liquidity pools, all within controlled Docker environments.
When tested against 34 real-world smart-contract exploits that occurred after March 2025, frontier models including Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 successfully reconstructed 19 of the attacks, extracting $4.6 million in simulated value. The agents achieved this without prior knowledge of the vulnerabilities, relying solely on reasoning through contract logic and iterating on multi-step transactions.
The economic viability of such attacks is already evident. In one experiment, GPT-5 analyzed 2,849 BNB Chain ERC-20 contracts at an average cost of $1.22 per contract, uncovering two zero-day vulnerabilities with a combined simulated profit of $3,694. The findings suggest a looming threat to DeFi security, as AI-driven exploits become both accessible and cost-effective.
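A quick back-of-envelope check, using only the figures reported above, shows why the economics matter: the entire 2,849-contract scan cost about as much as the two zero-days it found returned in simulated profit.

```python
# Back-of-envelope economics for the scan described above.
# All figures come from the reported experiment; nothing here is new data.
contracts = 2_849                # BNB Chain ERC-20 contracts analyzed
cost_per_contract = 1.22         # USD, reported average analysis cost
simulated_profit = 3_694.00      # USD, combined simulated profit of the two zero-days

total_cost = contracts * cost_per_contract
net = simulated_profit - total_cost

print(f"total scan cost:      ${total_cost:,.2f}")   # ~$3,475.78
print(f"net simulated result: ${net:,.2f}")          # ~$218.22
```

Even at today's model prices, a broad automated scan roughly pays for itself on a single pair of findings, and inference costs are only trending downward.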